By lying, we deny others our view of the world. And our dishonesty not only influences the choices they make, it often determines the choices they can make—in ways we cannot always predict. Every lie is an assault on the autonomy of those we lie to.

Sam Harris

Lying

During my interviews with suspects who have decided to speak to me, and who have answered my questions by providing an account that contrasts with those given by a complainant or witness, I often offer them an opportunity to explain the existence of those accounts. I invariably explain that if the statements made against them are untrue, they must be so for one of only two reasons: mistake or mendacity. Either the complainant has reported what they erroneously believe to be the truth of what took place, or they have consciously lied about what happened. When descriptions of a particular event align on the pertinent points, and they are provided by individuals who do not know one another, the implausibility of explaining that alignment by shared delusion or by conspiracy (invoking mistake and mendacity respectively) is compounded by an order of magnitude. In such cases I often make the objective suggestion that the evidence appears to support a more reasonable explanation: that the accounts are in fact true.

The popular pronouncements that “terrorism has no religion” and that “Islam is a religion of peace” are not true. I have written on this point in some detail and will rely on my previous contributions to this debate so as not to spend more time on it here. (Though I will ask those on the other side to consider one question: what evidence, if presented, would change your mind?) Instead, I wish to dwell here on a topic to which I have only alluded in past posts: namely, whether there is (at most) virtue or (at least) justification to be found in the practice of deliberately misleading others by making such pronouncements. Some who make them, for reasons some worse than others, do so sincerely and invariably from a position of ignorance, and though there are degrees to which such ignorance can be wilful, my quarrel is not with them (though I would invite them to read more on the subject). It appears to me a more calculated, and thus more culpable, action to deliberately tell untruths in this discourse as a means toward some apparently worthwhile end of greater utility.

Utilitarianism is a consequentialist theory of morality. That is to say, the consequences of an action, as opposed to the intentions behind it, constitute the pertinent variable by which that action may be judged as good or otherwise. This philosophical theory arose, as most others do given that the discipline is a dialectical one, in response to another. Immanuel Kant had advanced his version of deontology (himself responding to the absence of a secular and thus universal theory of morality), the thrust of which supposes the existence of universal principles from which no diversion is justified. He argued that morality, like mathematics, could be known and understood by pure reason. Modern utilitarianism, as founded by the work of Jeremy Bentham and John Stuart Mill, holds that actions should be judged by the happiness or pleasure which they produce. A well-known challenge to this theory is elucidated by the “Transplant Surgeon” thought experiment. Briefly, one is asked to consider the actions of a surgeon who, accurately determining that the healthy patient before her is a viable match for five of her other patients, each of whom would surely die within a matter of weeks but for the successful transplant of a heart, liver, pancreas, lung and kidney respectively, decides to kill the healthy patient in order to save the lives of the dying five. Whatever happiness or pleasure is eradicated on the part of the healthy patient, it is outweighed five-fold by that preserved and maximised on the part of those requiring the transplants. A strict utilitarian could do little but judge the surgeon’s actions as good.

Challenges such as the above gave rise to a new theory of utilitarianism, one proposing that there might be general rules which achieve more utility in aggregate. It dealt with the scenario of the surgeon by suggesting a rule: killing healthy patients in order to harvest their organs would yield less overall utility for a given society, since it would surely generate less pleasure to fear a visit to one’s GP, apprehending the risk that one might not emerge from it. (In The God Delusion, Richard Dawkins recounts this example and the near-universal aversion to the surgeon’s hypothetical action by citing Kant, pointing out that he “famously articulated the principle that a rational being should never be used as merely an unconsenting means to an end, even the end of benefiting others.”) Thus a distinction was born between the classical Act Utilitarianism and the new Rule Utilitarianism. The key point was to address an issue of myopia, and to submit that there is an obligation both to lengthen and to widen one’s field of moral vision so as to account for the effects of an action upon the future, and upon society more generally. I believe that arguments for the utility of lying about the contributing factors of terrorism, even when addressed most charitably, fail also in their myopia. But before we turn to discuss this directly, let us consider some other examples of this tactic’s employment.

I wrote some time ago in defence of freedom of expression, arguing specifically that speech (when it is merely speech) should be protected from suppression by acts of violence. The strongest argument against my position, though I do not accept it, stipulates that Nazis are so evil, and what they say so corrosive, that to allow them to speak would lead to acts of such evil that the society which allowed it would consequently be in part liable for them. This is a utilitarian argument which, in my submission, again falls down in its short-sightedness, by failing to account for the wider societal implications for individuals who would be quite justifiably concerned about being physically attacked, with apparent legitimacy, for the expression of an opinion. This example also prompts consideration of a linked phenomenon which perpetuates the problem: words such as ‘Nazi’ are now so overused that they have become practically meaningless. Freedom of expression is a principle. It is one by which we preserve a contingency for error-correction. To make exceptions based on a perceived opportunity for greater utility is to be Theseus rejecting Ariadne’s prudent offer of a spool of thread by which to escape the labyrinth. Our history is one of learning from errors, and it seems to me rather fundamental to our future prosperity to keep available the measures for detecting and correcting those errors.

In the same vein, there is an unsettling prevalence of what can only be described as slander. It seems to have become an acceptable tactic in political discourse simply to attach a label to an opponent, in the hope that it will be fatal to their reputation. Sam Harris has spoken about his first-hand experiences of this on several occasions. In conversation with Kyle Kulinski he struggled to convince his host of the distinction between what had been framed as “harsh criticism” and a clearly dishonest misrepresentation of his positions on matters such as torture by Glenn Greenwald and others. During his podcast episode featuring Neil deGrasse Tyson, the astrophysicist suggested that Harris had some responsibility as a writer and commentator to foresee how his words might be deliberately lifted out of context to nefarious ends, and to select his wording accordingly. I believe that there are very modern reasons for the prevalence of this practice, and for the absence of sufficient social penalties against it, but to reach them a digression is warranted.

I am currently reading Robert Wright’s The Moral Animal, a beautifully-written examination of the moral enlightenment available from understanding natural selection, and the lessons of evolutionary psychology. The book is ingeniously structured with punctuating examples from Charles Darwin’s life, as gleaned from his many correspondences and other sources which are never dull (despite his incessant self-reproach in believing his letters to be so) and are on occasion very moving. (Writing in the memorial to his daughter, Annie, who died at the age of ten: “Her joyousness and animal spirits radiated from her whole countenance, and rendered every movement elastic and full of life and vigour. It was delightful and cheerful to behold her. Her dear face now rises before me, as she used sometimes to come running downstairs with a stolen pinch of snuff for me, her whole form radiant with the pleasure of giving pleasure…”) It has also been a pleasure to find within the book a retelling of Robert Axelrod’s computer-world experiments in game theory.

Though the implications for the evolutionary theory of ethics would be profound, Axelrod’s initial interest was not natural selection. He devised a computer game in which competing programmes would meet one another and play simplified versions of the Prisoner’s Dilemma when they interacted. Upon meeting, each programme would choose how to interact with the other: it could either cooperate or defect. Each option would yield or cost points, but the number of points depended not only on the action the acting programme took; it depended also on the action its partner or opponent took. Mutual cooperation would mean small spoils for both; mutual defection would mean small losses for both; but an asymmetry in tactics would mean a big win for the defector and a big loss for the cooperator. Each programme would interact with each other programme a limited number of times, with the crucial code allowing them to remember previous encounters. Contributors were invited to write strategic programmes to enter the simulation and interact with the other entrants.
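The payoff structure just described can be made concrete in a few lines. This is a minimal sketch, and the point values are my own illustrative assumptions (not Axelrod’s actual scores), chosen only to match the text: small mutual spoils, small mutual losses, and a big win and big loss when the tactics are asymmetric.

```python
# Illustrative payoff table for one encounter of the simplified
# Prisoner's Dilemma. "C" = cooperate, "D" = defect.
# The numeric values are assumptions for this sketch only.
PAYOFFS = {
    ("C", "C"): (2, 2),    # mutual cooperation: small spoils for both
    ("D", "D"): (-1, -1),  # mutual defection: small losses for both
    ("D", "C"): (5, -5),   # defector wins big, cooperator loses big
    ("C", "D"): (-5, 5),   # and the mirror case
}

def play_round(my_move, their_move):
    """Return (my points, their points) for a single encounter."""
    return PAYOFFS[(my_move, their_move)]
```

Note that neither player can know in advance which row it will land in: the outcome of each encounter is fixed jointly by both choices, which is what makes the dilemma a dilemma.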

Wright writes: “After every program had had 200 encounters with every other program, Axelrod added up their scores and declared a winner. Then he held a second generation of competition after a systematic culling: each program was represented in proportion to its first-generation success; the fittest had survived.” The winning program was named TIT FOR TAT, and its strategy was literally that. It would cooperate in every first encounter with another programme, and would reciprocate the last action of the other programme from then on. It would defect against those which had defected on their last meeting, and cooperate with those which had cooperated. (I have fond memories of discussing the tit-for-tat strategy while learning about Social Contract theory at A-level, and particularly the anomaly of the “last move” scenario, where it is known in advance that there will be no repercussions for a defection. On occasion, I tell colleagues with approaching leaving dos about Axelrod’s simulation, and I invite them to see their farewell drink as an opportunity for a “last move”, and to accordingly tell everyone what they really think of them.)
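The TIT FOR TAT strategy, and a match of the kind Axelrod ran, can be sketched as follows. This is a toy reconstruction, not Axelrod’s code: the payoff values (3 for mutual cooperation, 1 for mutual defection, 5 and 0 for the asymmetric case) and the `always_defect` opponent are my own assumptions for illustration; only the 200-encounter length and the strategy itself come from the text.

```python
# Assumed payoff values for one round: (my points, their points).
PAYOFFS = {("C", "C"): (3, 3), ("D", "D"): (1, 1),
           ("D", "C"): (5, 0), ("C", "D"): (0, 5)}

def tit_for_tat(history):
    """Cooperate on the first encounter; thereafter mirror the
    opponent's last move. `history` lists the opponent's past moves."""
    return "C" if not history else history[-1]

def always_defect(history):
    """A hypothetical rival entry that defects unconditionally."""
    return "D"

def match(strategy_a, strategy_b, rounds=200):
    """Play two strategies against each other and return total scores."""
    hist_a, hist_b = [], []   # each side's memory of the *other's* moves
    score_a = score_b = 0
    for _ in range(rounds):
        move_a, move_b = strategy_a(hist_a), strategy_b(hist_b)
        pts_a, pts_b = PAYOFFS[(move_a, move_b)]
        score_a += pts_a
        score_b += pts_b
        hist_a.append(move_b)  # A remembers what B just did
        hist_b.append(move_a)  # and vice versa
    return score_a, score_b
```

Two TIT FOR TAT players cooperate throughout, so `match(tit_for_tat, tit_for_tat)` yields (600, 600) over 200 rounds; against `always_defect`, TIT FOR TAT is exploited exactly once and then retaliates for the remaining 199 rounds. The “last move” anomaly also falls out of the code: on a known final round, a defection can never be reciprocated, because no later encounter reads the history.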

It is of note that TIT FOR TAT was the shortest and simplest programme submitted, with code only five lines long. Wright stipulates that “if the strategies had been created by random computer mutation, rather than by design, it probably would have been among the first to appear.” He then goes on to make compelling arguments to suggest that reciprocal altruism was naturally selected, along with our appreciation for kind deeds and our indignation towards cheats, liars and freeloaders. But we must keep at the forefront of our minds, when we consider such theories, that our psychologies evolved without our modern culture in mind. As Wright explains: “What the theory of natural selection says . . . is that people’s minds were designed to maximize fitness in the environment in which those minds evolved. This environment is known as the EEA—the environment of evolutionary adaptation. Or, more memorably the ‘ancestral environment.’ [Original emphasis.]”

The ancestral environment would have been one where interactions between agents were personal and in person. While lying would have been easy, lying without detection would have been a trifle more difficult. The reputational implications for this fact are noted by Peter Singer in The Expanding Circle: “If I help you, but you do not help me, I can of course cease to help you in the future. If I can talk, however, I can do more. I can tell everyone else in the group what sort of person you are. They may then also be less likely to help you in future.” Social media, in part due to the available anonymity on the internet, acts as a veil between one’s words and their consequences. It diminishes the value of truth on two counts: first, it impedes others in their ability to detect one’s defection; and second, it (consequently) dulls one’s perception of the results of one’s own transgression. As Wright lucidly points out: “When we pass a homeless person, we may feel uncomfortable about failing to help. But what really gets the conscience twinging is making eye contact and still failing to help.” The perpetual remoteness with which we interact with one another leaves us ill-equipped to morally capitalise upon the psychological tools with which natural selection has endowed us. I believe that this factor has played a large and underappreciated role in the increasing tolerance for partisan untruth which we now face.

As for Islam, it seems clear to me that, while fewer terrorist attacks is a leading measure of utility, a utilitarian should have more measures than that alone. If this were the only desired end, a simple means of achieving it might well be to submit to the jihadists and force everyone to convert to their particular sect of the religion (just as a poorly-programmed super AI might deal with an uncomplicated instruction to end all human ailments by simply ending all humans). Again, utilitarianism on this point fails through its myopia. If we declare that we will accept lies about Islam so long as we are better off for them, we have no mechanism by which to tell what future lies, about what future topics, would be to our detriment. By the very nature of the situation, we simply could not know. It would be a shameful thing to embark so unwittingly down this one-way street, and a truly wicked thing to partake in the mendacity which sees us off.

I am sceptical of the prospect of an absolute moral realism, if only because of the omnipresent evidence that this universe was not designed with us in mind. There have been some happy circumstances along our evolutionary history which have led us to a point where we can have such things as equality between the sexes and a universality of human rights. But it seems a stretch to imagine that every dilemma has a right answer derived from some transcendental and universal morality. Yet if there exists a principle which we should treat as though it were transcendental, eternal and universal independently of our existence, it must surely be the simplest of all: the value of truth.